Results 1 - 3 of 3
1.
Exp Brain Res; 210(1): 67-80, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21374079

ABSTRACT

Many perceptual cue combination studies have shown that humans can integrate sensory information across modalities, as well as within a modality, in a manner that is close to optimal. While the limits of sensory cue integration have been extensively studied in the context of perceptual decision tasks, the evidence obtained in the context of motor decisions provides a less consistent picture. Here, we studied the combination of visual and haptic information in the context of human arm movement control. We implemented a pointing task in which human subjects pointed at an unseen target whose vertical position varied randomly across trials. In each trial, we presented a haptic and a visual cue that provided noisy information about the target position halfway through the reach. We measured pointing accuracy as a function of haptic and visual cue onset and compared pointing performance to the predictions of a multisensory decision model. Our model accounts for pointing performance by computing the maximum a posteriori (MAP) estimate, assuming minimum-variance combination of uncertain sensory cues. Synchronicity of cue onset has previously been demonstrated to facilitate the integration of sensory information; we tested this in trials in which visual and haptic information was presented with a temporal disparity. We found that for our sensorimotor task, temporal disparity between the visual and haptic cues had no effect on pointing performance. Sensorimotor learning thus appears to use all available information and to apply the same near-optimal rules for cue combination that are used by perception.
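A minimal sketch of the decision model described above, under Gaussian assumptions (the prior, cue values, and variances below are illustrative placeholders, not the study's fitted parameters): the MAP estimate reduces to an inverse-variance (minimum-variance) weighted average of the prior and the two cues.

```python
import numpy as np

def map_estimate(prior_mean, prior_var, cues, cue_vars):
    """MAP estimate of target position under Gaussian assumptions:
    an inverse-variance (minimum-variance) weighted average of the
    prior and any number of noisy cues."""
    precisions = np.array([1.0 / prior_var] + [1.0 / v for v in cue_vars])
    values = np.array([prior_mean] + list(cues))
    weights = precisions / precisions.sum()
    return float(weights @ values)

# Hypothetical trial: visual cue at 1.0 cm (var 0.25), haptic cue at
# 1.6 cm (var 0.50), broad prior centered on 0 (var 10.0).
print(map_estimate(0.0, 10.0, cues=[1.0, 1.6], cue_vars=[0.25, 0.50]))
# -> ~1.18, pulled toward the more reliable visual cue
```

The weights fall out of Bayes' rule for Gaussians: each source contributes in proportion to its precision (inverse variance), which is exactly the minimum-variance combination the abstract refers to.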


Subject(s)
Cues; Movement/physiology; Photic Stimulation/methods; Psychomotor Performance/physiology; Space Perception/physiology; Time Perception/physiology; Adult; Female; Humans; Male; Physical Stimulation/methods; Vision Disparity/physiology; Young Adult
2.
J Vis; 9(5): 28.1-14, 2009 May 28.
Article in English | MEDLINE | ID: mdl-19757906

ABSTRACT

We present experimental and computational evidence on how visual and proprioceptive directional information is estimated during forward, visually driven arm movements. We presented noisy directional proprioceptive and visual stimuli simultaneously and in isolation midway through a pointing movement. Directional proprioceptive stimuli were created by brief force pulses, which varied in direction and were applied to the fingertip shortly after movement onset. Subjects indicated the perceived direction of the stimulus after each trial. We measured unimodal performance in trials in which we presented only the visual or only the proprioceptive stimulus. When we presented simultaneous but conflicting bimodal information, subjects' perceived direction fell between the visual and proprioceptive directions. We found that the judged mean direction matched the maximum-likelihood estimation (MLE) predictions but did not show the expected improvement in reliability relative to unimodal performance. We present an alternative model, probabilistic cue switching (PCS), which is consistent with our data. According to this model, subjects base their bimodal judgments on only one of the two directional cues in a given trial, with relative choice probabilities proportional to the average stimulus reliability. These results suggest that subjects based their decisions on a probability mixture of both modalities without integrating information across modalities.
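A minimal sketch of the PCS model (the cue directions, reliabilities, and trial count are illustrative assumptions): each bimodal response is based on exactly one cue, chosen with probability proportional to its average reliability, so the mean response shifts toward the more reliable cue while the response variance stays large, with none of the reduction that true integration would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

def pcs_trial(visual_dir, prop_dir, visual_rel, prop_rel):
    """One bimodal trial under probabilistic cue switching: the
    response follows exactly one cue, picked with probability
    proportional to that cue's average reliability."""
    p_visual = visual_rel / (visual_rel + prop_rel)
    return visual_dir if rng.random() < p_visual else prop_dir

# Conflicting cues: visual at 10 deg, proprioceptive at 30 deg, with
# the visual cue twice as reliable.
responses = [pcs_trial(10.0, 30.0, visual_rel=2.0, prop_rel=1.0)
             for _ in range(10_000)]
print(np.mean(responses))  # ~16.7 deg: a reliability-weighted mixture mean
print(np.std(responses))   # stays large: no bimodal reliability benefit
```

This is what distinguishes PCS from MLE integration in the data: both predict the same mean response under cue conflict, but only true integration predicts a reduction in variance.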


Subject(s)
Motion Perception/physiology; Proprioception/physiology; Psychomotor Performance/physiology; Female; Humans; Male; Models, Neurological; Sensory Thresholds; Young Adult
3.
J Neurophysiol; 101(6): 2789-801, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19357346

ABSTRACT

Berkeley suggested that "touch educates vision"; that is, haptic input may be used to calibrate visual cues and so improve visual estimation of properties of the world. Here, we test whether haptic input may also be used to "miseducate" vision, causing observers to rely more heavily on misleading visual cues. Human subjects compared the depth of two cylindrical bumps illuminated by light sources located at different positions relative to the surface. As in previous work using judgments of surface roughness, we find that observers judge bumps to have greater depth when the light source is located eccentric to the surface normal (i.e., when shadows are more salient). Following several sessions of visual judgments of depth, subjects underwent visuohaptic training in which haptic feedback was artificially correlated with the "pseudocue" of shadow size and artificially decorrelated with disparity and texture. Although there were large individual differences, almost all observers integrated haptic cues during visuohaptic training. For some observers, subsequent visual judgments of bump depth were unaffected by the training; however, for 5 of 12 observers, training significantly increased the weight given to pseudocues, making subsequent visual estimates of shape less veridical. We conclude that haptic information can be used to reweight visual cues, putting more weight on misleading pseudocues even when more trustworthy visual cues are available in the scene.
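A minimal sketch of the reweighting idea (the depths, weights, and delta-rule update below are illustrative assumptions, not the paper's fitted model): perceived depth is a weighted sum of a trustworthy cue (disparity/texture) and the shadow-size pseudocue, and haptic feedback that is artificially correlated with the pseudocue drives weight toward it.

```python
def perceived_depth(disparity_depth, pseudo_depth, w_pseudo):
    """Perceived bump depth as a weighted sum of a trustworthy cue
    (disparity/texture) and the shadow-size pseudocue."""
    return (1.0 - w_pseudo) * disparity_depth + w_pseudo * pseudo_depth

def reweight(w_pseudo, haptic_depth, disparity_depth, pseudo_depth, lr=0.1):
    """Toy delta-rule update: shift weight toward whichever cue better
    predicts the haptic feedback."""
    err_disp = abs(haptic_depth - disparity_depth)
    err_pseudo = abs(haptic_depth - pseudo_depth)
    target = err_disp / (err_disp + err_pseudo + 1e-12)  # 1.0 if pseudocue is perfect
    return w_pseudo + lr * (target - w_pseudo)

w = 0.2  # modest pre-training weight on the pseudocue
for _ in range(20):  # training: haptic feedback tracks the pseudocue
    w = reweight(w, haptic_depth=3.0, disparity_depth=2.0, pseudo_depth=3.0)
print(round(w, 2))                   # -> ~0.9: weight on the pseudocue has grown
print(perceived_depth(2.0, 3.0, w))  # post-training percept sits near the pseudocue
```

The qualitative prediction matches the result above: after training with pseudocue-correlated haptic feedback, visual estimates lean more on shadow size and become less veridical.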


Subject(s)
Cues; Depth Perception/physiology; Pattern Recognition, Visual/physiology; Touch/physiology; Visual Fields/physiology; Feedback/physiology; Humans; Judgment; Photic Stimulation/methods