Results 1 - 8 of 8
1.
Cereb Cortex ; 25(9): 2907-18, 2015 Sep.
Article in English | MEDLINE | ID: mdl-24794918

ABSTRACT

Recent evidence suggests an interaction between the ventral visual-perceptual and dorsal visuo-motor brain systems during the course of object recognition. However, the precise function of the dorsal stream for perception remains to be determined. The present study specified the functional contribution of the visuo-motor system to visual object recognition using functional magnetic resonance imaging and event-related potentials (ERPs) during action priming. Primes were movies showing hands performing an action with an object, with the object itself erased from the movie, followed by a manipulable target object, which afforded either a similar or a dissimilar action (congruent vs. incongruent condition). Participants had to recognize the target object within a picture-word matching task. Priming-related reductions of brain activity were found in frontal and parietal visuo-motor areas as well as in ventral regions including inferior and anterior temporal areas. Effective connectivity analyses suggested functional influences of parietal areas on anterior temporal areas. ERPs revealed priming-related source activity in visuo-motor regions at about 120 ms and later activity in the ventral stream at about 380 ms. Hence, rapidly initiated visuo-motor processes within the dorsal stream functionally contribute to visual object recognition in interaction with ventral stream processes dedicated to visual analysis and semantic integration.


Subject(s)
Brain Mapping; Motor Cortex/physiology; Movement; Pattern Recognition, Visual/physiology; Recognition, Psychology/physiology; Visual Cortex/physiology; Adult; Electroencephalography; Evoked Potentials/physiology; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Models, Neurological; Motor Cortex/blood supply; Oxygen/blood; Photic Stimulation; Visual Cortex/blood supply; Young Adult
2.
Neuroimage ; 60(2): 1063-72, 2012 Apr 02.
Article in English | MEDLINE | ID: mdl-22001262

ABSTRACT

Behaviourally, humans have been shown to integrate multisensory information in a statistically optimal fashion by averaging the individual unisensory estimates according to their relative reliabilities. This form of integration is optimal in that it yields the most reliable (i.e. least variable) multisensory percept. The present study investigates the neural mechanisms underlying integration of visual and tactile shape information at the macroscopic scale of the regional BOLD response. Observers discriminated the shapes of ellipses that were presented bimodally (visual-tactile) or visually alone. A 2 × 5 factorial design manipulated (i) the presence vs. absence of tactile shape information and (ii) the reliability of the visual shape information (five levels). We then investigated whether regional activations underlying tactile shape discrimination depended on the reliability of visual shape. Indeed, in primary somatosensory cortices (bilateral BA2) and the superior parietal lobe, the responses to tactile shape input were increased when the reliability of visual shape information was reduced. Conversely, tactile inputs suppressed visual activations in the right posterior fusiform gyrus when the visual signal was blurred and unreliable. Somatosensory and visual cortices may sustain integration of visual and tactile shape information either via direct connections from visual areas or top-down effects from higher order parietal areas.
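The reliability-weighted averaging model referenced in this abstract can be sketched in a few lines. This is a generic maximum-likelihood cue-combination illustration, not code or data from the study; the function name and the numbers in the example are hypothetical.

```python
def combine_cues(estimates, variances):
    """Maximum-likelihood combination of unisensory estimates.

    Each estimate is weighted by its reliability (inverse variance),
    so the combined percept is the least-variable linear combination
    of the inputs.
    """
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * s for w, s in zip(weights, estimates))
    # The combined variance 1/total is never larger than the smallest
    # unisensory variance, which is why bimodal discrimination can
    # outperform either modality alone.
    return combined, 1.0 / total

# Hypothetical example: a sharp visual shape estimate (variance 1.0)
# and a noisier tactile estimate (variance 4.0).
shape, var = combine_cues([10.0, 12.0], [1.0, 4.0])
# shape == 10.4 (vision dominates), var == 0.8 (below both 1.0 and 4.0)
```

Blurring the visual stimulus, as in the five visual-reliability levels above, raises the visual variance and thus shifts weight toward the tactile estimate.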


Subject(s)
Brain/physiology; Form Perception/physiology; Nerve Net/physiology; Touch Perception/physiology; Visual Perception/physiology; Adult; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult
3.
J Cogn Neurosci ; 23(8): 1864-74, 2011 Aug.
Article in English | MEDLINE | ID: mdl-20617882

ABSTRACT

Perception and action are classically thought to be supported by functionally and neuroanatomically distinct mechanisms. However, recent behavioral studies using an action priming paradigm challenged this view and showed that action representations can facilitate object recognition. This study determined whether action representations influence object recognition during early visual processing stages, that is, within the first 150 msec. To this end, the time course of brain activation underlying such action priming effects was examined by recording ERPs. Subjects were sequentially presented with two manipulable objects (e.g., tools), which had to be named. In the congruent condition, both objects afforded similar actions, whereas dissimilar actions were afforded in the incongruent condition. In order to test the influence of the prime modality on action priming, the first object (prime) was presented either as a picture or as a word. We found an ERP effect of action priming over the central scalp as early as 100 msec after target onset for pictorial, but not for verbal, primes. A later action priming effect on the N400 ERP component known to index semantic integration processes was obtained for both picture and word primes. The early effect was generated in a fronto-parietal motor network, whereas the late effect reflected activity in anterior temporal areas. The present results indicate that action priming influences object recognition through both fast and slow pathways: Action priming affects rapid visuomotor processes only when elicited by pictorial prime stimuli. However, it also modulates comparatively slow conceptual integration processes independent of the prime modality.


Subject(s)
Brain Mapping; Pattern Recognition, Visual/physiology; Recognition, Psychology; Adult; Analysis of Variance; Electroencephalography; Evoked Potentials, Visual/physiology; Female; Functional Laterality; Humans; Male; Photic Stimulation/methods; Reaction Time/physiology; Time Factors; Young Adult
4.
Exp Brain Res ; 200(3-4): 251-8, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19669130

ABSTRACT

Observing an action activates action representations in the motor system. Moreover, the representations of manipulable objects are closely linked to the motor systems at a functional and neuroanatomical level. Here, we investigated whether action observation can facilitate object recognition using an action priming paradigm. As prime stimuli we presented short video movies showing hands performing an action in interaction with an object (where the object itself was always removed from the video). The prime movie was followed by a (briefly presented) target object affording motor interactions that were either similar (congruent condition) or dissimilar (incongruent condition) to the prime action. Participants had to decide whether an object name shown after the target picture corresponded with the picture or not (picture-word matching task). We found superior accuracy for prime-target pairs with congruent as compared to incongruent actions across two experiments. Thus, action observation can facilitate recognition of a manipulable object typically involving a similar action. This action priming effect supports the notion that action representations play a functional role in object recognition.


Subject(s)
Pattern Recognition, Visual/physiology; Psychomotor Performance/physiology; Recognition, Psychology/physiology; Adult; Discrimination Learning; Female; Humans; Male; Neuropsychological Tests; Observation/methods; Photic Stimulation/methods; Reaction Time/physiology; Young Adult
5.
J Vis ; 8(1): 21.1-16, 2008 Jan 31.
Article in English | MEDLINE | ID: mdl-18318624

ABSTRACT

Some object properties (e.g., size, shape, and depth information) are perceived through multiple sensory modalities. Such redundant sensory information is integrated into a unified percept. The integrated estimate is a weighted average of the sensory estimates, where higher weight is attributed to the more reliable sensory signal. Here we examine whether modality-specific attention can affect multisensory integration. Selectively reducing attention in one sensory channel can reduce the relative reliability of the estimate derived from this channel and might thus alter the weighting of the sensory estimates. In the present study, observers performed unimodal (visual and haptic) and bimodal (visual-haptic) size discrimination tasks. They either performed the primary task alone or they performed a secondary task simultaneously (dual task). The secondary task consisted of a same/different judgment of rapidly presented visual letter sequences, and so might be expected to withdraw attention predominantly from the visual rather than the haptic channel. Comparing size discrimination performance in single- and dual-task conditions, we found that vision-based estimates were more affected by the secondary task than the haptics-based estimates, indicating that indeed attention to vision was more reduced than attention to haptics. This attentional manipulation, however, did not affect the cue weighting in the bimodal task. Bimodal discrimination performance was better than unimodal performance in both single- and dual-task conditions, indicating that observers still integrate visual and haptic size information in the dual-task condition, when attention is withdrawn from vision. These findings indicate that visual-haptic cue weighting is independent of modality-specific attention.


Subject(s)
Attention/physiology; Discrimination, Psychological/physiology; Visual Perception/physiology; Adolescent; Adult; Female; Humans; Photic Stimulation; Sensory Thresholds/physiology
6.
Exp Brain Res ; 179(4): 595-606, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17225091

ABSTRACT

Many tasks can be carried out by using several sources of information. For example, an object's size and shape can be judged based on visual as well as haptic cues. It has been shown recently that human observers integrate visual and haptic size information in a statistically optimal fashion, in the sense that the integrated estimate is most reliable (Ernst and Banks in Nature 415:429-433, 2002). In the present study, we tested whether this also holds for visual and haptic shape information. In previous studies virtual stimuli were used to test for optimality in integration. Virtual displays may, however, contain additional inappropriate cues that provide conflicting information and thus affect cue integration. Therefore, we studied optimal integration using real objects. Furthermore, we presented visual information via mirrors to create a spatial separation between visual and haptic cues while observers saw their hand touching the object and thus knew that they were seeing and feeling the same object. Does this knowledge promote integration even though signals are spatially discrepant, which has been shown to lead to a breakdown of integration (Gepshtein et al. in J Vis 5:1013-1023, 2005)? Consistent with the model predictions, observers weighted visual and haptic cues to shape according to their reliability: progressively more weight was given to haptics when visual information became less reliable. Moreover, the integrated visual-haptic estimate was more reliable than either unimodal estimate. These findings suggest that observers integrate visual and haptic shape information of real 3D objects. Thus, knowledge that multisensory signals arise from the same object seems to promote integration.
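The optimality prediction from Ernst and Banks (2002) tested in this study can be stated compactly; the notation below is a standard textbook formulation, not taken from the paper itself. With unimodal visual and haptic estimates $\hat{S}_V$, $\hat{S}_H$ and noise variances $\sigma_V^2$, $\sigma_H^2$:

```latex
\hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H,
\qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2},
\qquad
w_H = 1 - w_V,

\sigma_{VH}^2 = \frac{\sigma_V^2 \, \sigma_H^2}{\sigma_V^2 + \sigma_H^2}
\le \min\!\left(\sigma_V^2, \sigma_H^2\right).
```

As $\sigma_V^2$ grows (vision made less reliable), $w_H$ increases, matching the finding that progressively more weight was given to haptics; the variance bound is why the integrated visual-haptic estimate is predicted to be more reliable than either unimodal estimate.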


Subject(s)
Form Perception/physiology; Recognition, Psychology/physiology; Space Perception/physiology; Touch/physiology; Adolescent; Adult; Cues; Female; Hand/physiology; Humans; Male; Mechanoreceptors/physiology; Neuropsychological Tests; Photic Stimulation; Physical Stimulation; Visual Fields/physiology
7.
Perception ; 36(10): 1523-33, 2007.
Article in English | MEDLINE | ID: mdl-18265835

ABSTRACT

The brain integrates object information from multiple sensory systems to form a unique representation of our environment. Temporal synchrony and spatial coincidence are important factors for multisensory integration, indicating that the multisensory signals come from a common source. Spatial separations can lead to a decline of visual-haptic integration (Gepshtein et al., 2005, Journal of Vision, 5, 1013-1023). Here we tested whether prior knowledge that two signals arise from the same object can promote integration even when the signals are spatially discrepant. In one condition, participants had direct view of the object they touched. In a second condition, mirrors were used to create a spatial separation between the seen and the felt object. Participants saw the mirror and their hand in the mirror exploring the object and thus knew that they were seeing and touching the same object. To determine the visual-haptic interaction we created a conflict between the seen and the felt shape using an optically distorting lens that made a rectangle look like a square. Participants judged the shape of the probe by selecting a comparison object matching in shape. We found a mutual biasing effect of shape information from vision and touch, independent of whether participants directly looked at the object they touched or whether the seen and the felt object information was spatially separated with the aid of a mirror. This finding suggests that prior knowledge about object identity can promote integration, even when information from vision and touch is provided at spatially discrepant locations.


Subject(s)
Form Perception/physiology; Recognition, Psychology/physiology; Touch/physiology; Visual Perception/physiology; Adolescent; Adult; Female; Humans; Male; Middle Aged; Statistics as Topic
8.
Exp Brain Res ; 174(2): 221-8, 2006 Sep.
Article in English | MEDLINE | ID: mdl-16636796

ABSTRACT

It is typically assumed that perception for action and object recognition are subserved by functionally and neuroanatomically distinct processing streams in the brain. However, recent evidence challenges this classical view and suggests an interaction between both visual processing streams. While previous studies showed an influence of object perception on action-related tasks, we investigated whether action representations facilitate visual object recognition. In order to address this question, two briefly displayed masked objects were sequentially presented, affording either congruent or incongruent motor interactions. We found superior naming accuracy for object pairs with congruent as compared to incongruent motor interactions (Experiment 1). This action priming effect indicates that action representations can facilitate object recognition. We further investigated the nature of the representations underlying this action priming effect. The effect was absent when the prime stimulus was presented as a word (Experiment 2). Thus, the action priming effect seems to rely on action representations specified by visual object information. Our findings suggest that processes of object-directed action influence object recognition.


Subject(s)
Brain/physiology; Motion Perception/physiology; Pattern Recognition, Visual/physiology; Visual Pathways/physiology; Adolescent; Adult; Female; Humans; Male; Middle Aged; Movement/physiology; Neuropsychological Tests; Photic Stimulation; Psychomotor Performance/physiology