Results 1 - 20 of 24
1.
J Vis ; 24(4): 12, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38625089

ABSTRACT

In European painting, a transition took place in which artists began to consciously introduce blurred or soft contours into their works. There may have been several reasons for this. One suggestion in the art historical literature is that it was done to create a stronger sense of volume in the depicted figures or objects. Here we describe four experiments in which we tested whether soft or blurred contours do indeed enhance a sense of volume or depth. In the first three experiments, we found that, for both paintings and abstract shapes, three-dimensionality was actually decreased rather than increased for blurred (and line) contours in comparison with sharp contours. In the last experiment, we controlled for the position of the blur (on the lit or dark side) and found that blur on the lit side evoked a stronger impression of three-dimensionality. Overall, the experiments robustly show that the art historical conjecture that a blurred contour increases three-dimensionality is not warranted. Because blurred contours can be found in many established artworks, such as those by Leonardo and Vermeer, there must be rationales for their use other than the creation of a stronger sense of volume or depth.


Subject(s)
Paintings , Humans , Perception
2.
Cognition ; 237: 105465, 2023 08.
Article in English | MEDLINE | ID: mdl-37150154

ABSTRACT

Weber's law, the principle that the uncertainty of perceptual estimates increases proportionally with object size, is regularly violated when considering the uncertainty of the grip aperture during grasping movements. The origins of this perception-action dissociation are debated and have been attributed to various causes, including different coding of visual size information for perception and action, biomechanical factors, the use of positional information to guide grasping, or sensorimotor calibration. Here, we contrasted these accounts and compared perceptual and grasping uncertainties by asking people to indicate the visually perceived center of differently sized objects (Perception condition) or to grasp and lift the same objects with the requirement to achieve a balanced lift (Action condition). We found that the variability (uncertainty) of contact positions increased as a function of object size in both perception and action. The adherence of the Action condition to Weber's law and the consequent absence of a perception-action dissociation contradict the predictions based on different coding of visual size information and sensorimotor calibration. These findings provide clear evidence that the human perceptual and visuomotor systems rely on the same visual information and suggest that the previously reported violations of Weber's law in grasping movements should be attributed to other factors.
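Weber's law as stated above can be illustrated with a small simulation: when the standard deviation of noisy estimates grows in proportion to the true size, the ratio of SD to size (the Weber fraction) stays constant. This is a hypothetical sketch; the Weber fraction k = 0.05 and the object sizes are invented for illustration and are not values from the study.

```python
import random
import statistics

def simulate_estimates(true_size, k=0.05, n=10000, seed=0):
    """Draw n noisy size estimates whose SD is k * true_size (Weber's law)."""
    rng = random.Random(seed)
    return [rng.gauss(true_size, k * true_size) for _ in range(n)]

sizes = [20.0, 40.0, 80.0]  # illustrative object sizes, e.g. in mm
sds = [statistics.stdev(simulate_estimates(s)) for s in sizes]

# The SD of the estimates grows with object size, but the ratio
# SD / size (the Weber fraction) remains approximately constant.
fractions = [sd / s for sd, s in zip(sds, sizes)]
```

Under this law, doubling the object size roughly doubles the estimate's variability, which is the proportional scaling the abstract refers to.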


Subject(s)
Psychomotor Performance , Visual Perception , Humans , Differential Threshold , Movement , Hand Strength
3.
eNeuro ; 2022 May 31.
Article in English | MEDLINE | ID: mdl-35641223

ABSTRACT

Human multisensory grasping movements (i.e., seeing and feeling a handheld object while grasping it with the contralateral hand) are superior to movements guided by each separate modality. This multisensory advantage might be driven by the integration of vision with either the haptic position cue only or with both position and size cues. To contrast these two hypotheses, we manipulated visual uncertainty (central vs. peripheral vision) and the availability of haptic cues during multisensory grasping. We found a multisensory benefit irrespective of the degree of visual uncertainty, suggesting that the integration process involved in multisensory grasping can be flexibly modulated by the contribution of each modality. Increasing visual uncertainty revealed the role of the distinct haptic cues. The haptic position cue was sufficient to promote multisensory benefits, evidenced by faster actions with smaller grip apertures, whereas the haptic size cue was fundamental in fine-tuning grip aperture scaling. These results support the hypothesis that, in multisensory grasping, vision is integrated with all haptic cues, with the haptic position cue playing the key part. Our findings highlight the important role of non-visual sensory inputs in sensorimotor control and hint at the potential contributions of the haptic modality to developing and maintaining visuomotor functions.

SIGNIFICANCE STATEMENT

The longstanding view that vision is the primary sense we rely on to guide grasping movements relegates equally important haptic inputs, such as touch and proprioception, to a secondary role. Here we show that, by increasing visual uncertainty during visuo-haptic grasping, the central nervous system exploits distinct haptic inputs about the object's position and size to optimize grasping performance. Specifically, we demonstrate that haptic inputs about the object's position are fundamental in supporting vision to enhance grasping performance, whereas haptic size inputs can further refine hand shaping. Our results provide strong evidence that non-visual inputs serve an important, previously underappreciated, functional role in grasping.

4.
Vision Res ; 185: 50-57, 2021 08.
Article in English | MEDLINE | ID: mdl-33895647

ABSTRACT

Goal-directed aiming movements toward visuo-haptic targets (i.e., seen and handheld targets) are generally more precise than those toward visual-only or haptic-only targets. This multisensory advantage stems from a continuous inflow of haptic and visual target information during the movement planning and execution phases. However, in everyday life, multisensory movements often occur without the support of continuous visual information. Here we investigated whether, and to what extent, limiting visual information to the initial stage of the action still leads to a multisensory advantage. Participants were asked to reach a handheld target while vision was briefly provided during the movement planning phase (50 ms, 100 ms, or 200 ms of vision before movement onset), during the planning and early execution phases (400 ms of vision), or during the entire movement. Additional conditions were performed in which only haptic target information was provided, or only vision was provided, either briefly (50 ms, 100 ms, 200 ms, 400 ms) or throughout the entire movement. Results showed that 50 ms of vision before movement onset was sufficient to trigger a direction-specific visuo-haptic integration process that increased endpoint precision. We conclude that, when continuous support from vision is not available, endpoint precision is determined by the less recent, but most reliable, multisensory information rather than by the latest unisensory (haptic) inputs.


Subject(s)
Haptic Technology , Touch Perception , Humans , Movement , Vision, Ocular , Visual Perception
5.
Cortex ; 135: 173-185, 2021 02.
Article in English | MEDLINE | ID: mdl-33383479

ABSTRACT

Grasping actions are directed not only toward objects we see but also toward objects we both see and touch (multisensory grasping). In this latter case, the integration of visual and haptic inputs improves movement performance compared to each sense alone. This performance advantage could be due to the integration of all the redundant positional and size cues or to the integration of only a subset of these cues. Here we selectively provided specific cues to tease apart how these different sensory sources contribute to visuo-haptic multisensory grasping. We demonstrate that the availability of the haptic positional cue together with the visual cues is sufficient to achieve the same grasping performance as when all cues are available. These findings provide strong evidence that the human sensorimotor system relies on non-visual sensory inputs and open new perspectives on their role in supporting vision during both development and adulthood.


Subject(s)
Touch Perception , Visual Perception , Adult , Hand Strength , Humans , Touch , Vision, Ocular
6.
J Neurophysiol ; 122(6): 2614-2620, 2019 12 01.
Article in English | MEDLINE | ID: mdl-31693442

ABSTRACT

Haptics provides information about the size and position of a handheld object. However, it is still unknown how haptics contributes to action correction when a sudden perturbation changes the configuration of the handheld object. In this study, we occasionally perturbed the size of an object that was the target of a right-hand reach-to-grasp movement. In some cases, participants were holding the target object with their left hand, which provided haptic information about the object perturbation. We compared the corrective responses to perturbations in three different sensory conditions: visual (participants had full vision of the object, but haptic information from the left hand was prevented), haptic (object size was sensed by the left hand and vision was prevented), and visuo-haptic (both visual and haptic information were available throughout the movement). We found that haptic inputs evoked faster contralateral corrections than visual inputs, although actions in the haptic and visual conditions were similar in movement duration. Strikingly, the corrective responses in the visuo-haptic condition were as fast as those found in the haptic condition, a result contrary to what a simple summation of unisensory signals would predict. These results suggest the existence of a haptomotor reflex that can trigger automatic and efficient grasping corrections of the contralateral hand that are faster than those initiated by the well-known visuomotor reflex and the tactile-motor reflex.

NEW & NOTEWORTHY

We show that online grip aperture corrections during grasping actions are contingent on the sensory modality used to detect the object perturbation. We found that sensing perturbations with the contralateral hand only (haptics) leads to faster action corrections than when object perturbations are only visually sensed. Moreover, corrections following visuo-haptic perturbations were as fast as those following haptic perturbations. Thus, a haptomotor reflex triggers faster automatic responses than the visuomotor reflex.


Subject(s)
Hand/physiology , Motor Activity/physiology , Psychomotor Performance/physiology , Reflex/physiology , Touch Perception/physiology , Visual Perception/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult
7.
Sci Rep ; 9(1): 3665, 2019 03 06.
Article in English | MEDLINE | ID: mdl-30842478

ABSTRACT

Grasping movements are typically performed toward visually sensed objects. However, planning and execution of grasping movements can be supported also by haptic information when we grasp objects held in the other hand. In the present study we investigated this sensorimotor integration process by comparing grasping movements towards objects sensed through visual, haptic or visuo-haptic signals. When movements were based on haptic information only, hand preshaping was initiated earlier, the digits closed on the object more slowly, and the final phase was more cautious compared to movements based on only visual information. Importantly, the simultaneous availability of vision and haptics led to faster movements and to an overall decrease of the grip aperture. Our findings also show that each modality contributes to a different extent in different phases of the movement, with haptics being more crucial in the initial phases and vision being more important for the final on-line control. Thus, vision and haptics can be flexibly combined to optimize the execution of grasping movement.


Subject(s)
Hand Strength/physiology , Touch Perception/physiology , Biomechanical Phenomena , Female , Hand , Humans , Male , Motor Activity , Nontherapeutic Human Experimentation , Visual Perception/physiology , Wrist/physiology , Young Adult
8.
J Vis ; 19(2): 4, 2019 02 01.
Article in English | MEDLINE | ID: mdl-30725097

ABSTRACT

Although product photos and movies are abundant in online shopping environments, little is known about how much of the real product experience they capture. While previous studies have shown that users have the impression that movies or interactive imagery are more effective forms of communication, no studies have addressed this issue quantitatively. We used nine different samples of jeans: fabrics in general represent a large and interesting product category, and jeans specifically can be visually rather similar while being haptically rather different. In the first experiment, we let observers match a haptic stimulus to a visual representation and found that movies were more informative than photos about how objects would feel. In a second experiment, we sought to confirm this finding using a different experimental paradigm that we deemed a better general paradigm for future studies on this topic: correlations of pairwise similarity ratings. However, the beneficial effect of the movies was absent under this new paradigm. In the third experiment, we investigated this issue by letting people visually observe other people making haptic similarity judgments. Here, we did find a significant correlation between haptic and visual data. Together, the three experiments suggest that there is a small but significant advantage of movies over photos (Experiment 1) but, at the same time, a significant difference between visual representations and visually perceiving products in reality (Experiments 2 and 3). This finding suggests substantial theoretical potential for decreasing the gap between virtual and real product presentation.


Subject(s)
Cell Communication/physiology , Motion Pictures , Textiles , Visual Perception/physiology , Adult , Female , Humans , Male , Young Adult
9.
Psychol Res ; 83(1): 147-158, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30259095

ABSTRACT

The perceived distance of objects is biased depending on the distance from the observer at which objects are presented, such that the egocentric distance tends to be overestimated for closer objects, but underestimated for objects further away. This leads to the perceived depth of an object (i.e., the perceived distance from the front to the back of the object) also being biased, decreasing with object distance. Several studies have found the same pattern of biases in grasping tasks. However, in most of those studies, object distance and depth were solely specified by ocular vergence and binocular disparities. Here we asked whether grasping objects viewed from above would eliminate distance-dependent depth biases, since this vantage point introduces additional information about the object's distance, given by the vertical gaze angle, and its depth, given by contour information. Participants grasped objects presented at different distances (1) at eye-height and (2) 130 mm below eye-height, along their depth axes. In both cases, grip aperture was systematically biased by the object distance along most of the trajectory. The same bias was found whether the objects were seen in isolation or above a ground plane to provide additional depth cues. In two additional experiments, we verified that a consistent bias occurs in a perceptual task. These findings suggest that grasping actions are not immune to biases typically found in perceptual tasks, even when additional cues are available. However, online visual control can counteract these biases when direct vision of both digits and final contact points is available.


Subject(s)
Cues , Distance Perception/physiology , Hand Strength/physiology , Movement/physiology , Psychomotor Performance/physiology , Adult , Female , Humans , Male , Young Adult
10.
Sci Rep ; 8(1): 14803, 2018 10 04.
Article in English | MEDLINE | ID: mdl-30287832

ABSTRACT

It is reasonable to assume that when we grasp an object we carry out the movement based only on the currently available sensory information. Unfortunately, our senses are often prone to err. Here, we show that the visuomotor system exploits the mismatch between the predicted and sensory outcomes of the immediately preceding action (sensory prediction error) to attain a degree of robustness against the fallibility of our perceptual processes. Participants performed reach-to-grasp movements toward objects presented at eye level at various distances. Grip aperture was affected by the object distance, even though both visual feedback of the hand and haptic feedback were provided. Crucially, grip aperture as well as the trajectory of the hand were systematically influenced also by the immediately preceding action. These results are well predicted by a model that modifies an internal state of the visuomotor system by adjusting the visuomotor mapping based on the sensory prediction errors. In sum, the visuomotor system appears to be in a constant fine-tuning process which makes the generation and control of grasping movements more resistant to interferences caused by our perceptual errors.
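The kind of model described above, in which an internal state of the visuomotor system is adjusted by the previous trial's sensory prediction error, can be sketched in its simplest delta-rule form. This is a hypothetical illustration, not the authors' fitted model; the learning rate and the trial values are assumptions chosen for demonstration.

```python
def update_mapping(bias, predicted, sensed, rate=0.3):
    """Nudge an internal bias term by a fraction of the sensory
    prediction error (sensed outcome minus predicted outcome)."""
    return bias + rate * (sensed - predicted)

bias = 0.0
# Suppose the sensed grip outcome repeatedly exceeds the prediction
# (illustrative values in mm): over trials, the bias is pulled toward
# the value that cancels the prediction error, so each preceding
# action leaves a trace in the next one.
for _ in range(3):
    bias = update_mapping(bias, predicted=50.0 + bias, sensed=60.0)
```

The key property this sketch captures is that the current movement depends not only on current sensory input but also on the error experienced on the immediately preceding action.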


Subject(s)
Hand Strength , Psychomotor Performance , Adult , Female , Healthy Volunteers , Humans , Male , Models, Neurological , Young Adult
11.
J Neurosci Methods ; 308: 129-134, 2018 10 01.
Article in English | MEDLINE | ID: mdl-30009842

ABSTRACT

We present a Matlab toolbox that allows the user to control Northern Digital's Optotrak system and collect data from it. The Optotrak is a modular motion capture system that tracks the positions of infrared markers. It also supports grouping markers together as a single rigid body; the body's position and orientation, as well as all marker position data, can be obtained simultaneously. The installation, set-up, and alignment procedures are highly automated and thus require minimal human interaction. We provide additional scripts, functions, documentation, and examples to help experimenters integrate the Optotrak system into experiments using recent 64-bit computers and existing Matlab toolboxes.
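As a generic illustration of the "rigid body" grouping described above, a body's position is commonly summarized as the centroid of its markers' 3D positions. The sketch below is a toy Python example, not the toolbox's actual Matlab API; the marker coordinates are invented.

```python
def body_position(markers):
    """Centroid of a list of (x, y, z) marker positions: a simple
    summary of a rigid body's location from its tracked markers."""
    n = len(markers)
    return tuple(sum(m[i] for m in markers) / n for i in range(3))

# Three hypothetical markers attached to one tracked object.
markers = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 3.0, 0.0)]
center = body_position(markers)  # → (1.0, 1.0, 0.0)
```

Estimating the body's orientation additionally requires fitting a rotation between the measured markers and a stored reference configuration, which real motion-capture software handles internally.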


Subject(s)
Pattern Recognition, Automated , Software , Humans , Motion
12.
Exp Brain Res ; 236(4): 985-995, 2018 04.
Article in English | MEDLINE | ID: mdl-29399704

ABSTRACT

Our interaction with objects is facilitated by the availability of visual feedback. Here, we investigate how and when visual feedback affects the way we grasp an object. Based on the main views on grasping (reach-and-grasp and double-pointing views), we designed four experiments to test: (1) whether the availability of visual feedback influences the digits independently, and (2) whether the absence of visual feedback affects the initial part of the movement. Our results show that occluding (part of) the hand's movement path influences the movement trajectory from the beginning. Thus, people consider the available feedback when planning their movements. The influence of the visual feedback depends on which digit is occluded, but its effect is not restricted to the occluded digit. Our findings indicate that the control mechanisms are more complex than those suggested by current views on grasping.


Subject(s)
Feedback, Sensory/physiology , Motor Activity/physiology , Psychomotor Performance/physiology , Visual Perception/physiology , Adult , Humans
13.
Exp Brain Res ; 234(8): 2165-77, 2016 08.
Article in English | MEDLINE | ID: mdl-26996387

ABSTRACT

Even though it is recognized that vision plays an important role in grasping movements, it is not yet fully understood how visual feedback of the hand contributes to online control. Visual feedback could be used to shape the posture of the hand and fingers, to adjust the trajectory of the moving hand, or a combination of both. Here, we used a dynamic perturbation method that altered the position of the visual feedback relative to the actual positions of the thumb and index finger, virtually increasing or decreasing the visually sensed grip aperture. Subjects grasped objects in a virtual 3D environment with haptic feedback and with visual feedback provided by small virtual spheres anchored to their unseen fingertips. We found that the effects of the visually perturbed grip aperture arose predominantly late in the movement, when the hand was in the object's proximity. The online visual feedback assisted both the scaling of the grip aperture, conforming it to the object's dimensions, and the transport of the hand, correctly positioning the digits on the object's surface. However, the extent of these compensatory adjustments was contingent on the viewing geometry. Visual control of the actual grip aperture was mainly observed when the final grasp axis orientation was approximately perpendicular to the viewing direction. By contrast, when the final grasp axis was aligned with the viewing direction, visual control was predominantly concerned with guiding the digits toward the visible final contact point.


Subject(s)
Feedback, Sensory/physiology , Motor Activity/physiology , Psychomotor Performance/physiology , Space Perception/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Touch Perception/physiology , Young Adult
14.
Exp Brain Res ; 234(1): 255-65, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26449965

ABSTRACT

Reach-to-grasp movements performed without visual and haptic feedback of the hand are subject to systematic inaccuracies. Grasps directed at an object specified by binocular information usually end at the wrong distance with an incorrect final grip aperture. More specifically, moving the target object away from the observer leads to increasingly larger undershoots and smaller grip apertures. These systematic biases suggest that the visuomotor mapping is based on inaccurate estimates of an object's egocentric distance and 3D structure that compress the visual space. Here we ask whether the appropriate visuomotor mapping can be learned through an extensive exposure to trials where haptic and visual feedback of the hand is provided. By intermixing feedback trials with test trials without feedback, we aimed at maximizing the likelihood that the motor execution of test trials is positively influenced by that of preceding feedback trials. We found that the intermittent presence of feedback trials both (1) largely reduced the positioning error of the hand with respect to the object and (2) affected the shaping of the hand before the final grasp, leading to an overall more accurate performance. While this demonstrates an effective transfer of information from feedback trials to test trials, the remaining biases indicate that a compression of visual space is still taking place. The correct visuomotor mapping, therefore, could not be learned. We speculate that an accurate reconstruction of the scene at movement onset may not actually be needed. Instead, the online monitoring of the hand position relative to the object and the final contact with the object are sufficient for a successful execution of a grasp.


Subject(s)
Feedback, Sensory/physiology , Learning/physiology , Motor Activity/physiology , Psychomotor Performance/physiology , Touch Perception/physiology , Visual Perception/physiology , Adult , Calibration , Female , Humans , Male , Young Adult
15.
J Neurophysiol ; 112(12): 3189-96, 2014 Dec 15.
Article in English | MEDLINE | ID: mdl-25231616

ABSTRACT

Perceptual estimates of three-dimensional (3D) properties, such as the distance and depth of an object, are often inaccurate. Given the accuracy and ease with which we pick up objects, it may be expected that perceptual distortions do not affect how the brain processes 3D information for reach-to-grasp movements. Nonetheless, empirical results show that grasping accuracy is reduced when visual feedback of the hand is removed. Here we studied whether specific types of training could correct grasping behavior to perform adequately even when any form of feedback is absent. Using a block design paradigm, we recorded the movement kinematics of subjects grasping virtual objects located at different distances in the absence of visual feedback of the hand and haptic feedback of the object, before and after different training blocks with different feedback combinations (vision of the thumb and vision of thumb and index finger, with and without tactile feedback of the object). In the Pretraining block, we found systematic biases of the terminal hand position, the final grip aperture, and the maximum grip aperture like those reported in perceptual tasks. Importantly, the distance at which the object was presented modulated all these biases. In the Posttraining blocks only the hand position was partially adjusted, but final and maximum grip apertures remained unchanged. These findings show that when visual and haptic feedback are absent systematic distortions of 3D estimates affect reach-to-grasp movements in the same way as they affect perceptual estimates. Most importantly, accuracy cannot be learned, even after extensive training with feedback.


Subject(s)
Depth Perception , Feedback, Sensory , Movement , Psychomotor Performance , Adult , Biomechanical Phenomena , Female , Hand , Hand Strength , Humans , Male , Young Adult
16.
Exp Brain Res ; 232(9): 2997-3005, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24838554

ABSTRACT

When humans grasp an object, the thumb habitually contacts the object on the visible part, whereas the index finger makes contact with the object on the occluded part. Considering that the contact points play a critical role in object-oriented actions, we studied if and how the visibility of the points of contact for the thumb and index finger affects grasping movements. We adapted reach-to-point movements (visual feedback displacement: 150 mm in depth) performed with either the thumb or the index finger to measure how a newly learned visuomotor mapping transfers to grasping movements. We found a general transfer of adaptation from reach-to-point to reach-to-grasp movements. However, the transfer of adaptation depended on the visibility of contact points. In the first experiment, in which only the thumb's contact point was visible during grasping, the transfer of adaptation was larger after thumb than after index finger perturbation. In the second experiment, in which both contact points were equally visible, the transfer of adaptation was of similar magnitude. Furthermore, thumb trajectories were less variable than index trajectories in both experiments weakening the idea that the less variable digit is the digit that guides grasping movements. Our findings suggest that the difficulty in determining the contact points imposes specific constraints that influence how the hand is guided toward the to-be-grasped object.


Subject(s)
Feedback, Sensory/physiology , Hand Strength/physiology , Movement/physiology , Psychomotor Performance/physiology , Visual Perception/physiology , Female , Humans , Male , Reaction Time , Students , Universities
17.
J Neurosci ; 33(43): 17081-8, 2013 Oct 23.
Article in English | MEDLINE | ID: mdl-24155312

ABSTRACT

Perceptual judgments of relative depth from binocular disparity are systematically distorted in humans, despite in principle having access to reliable 3D information. Interestingly, these distortions vanish at a natural grasping distance, as if perceived stereo depth is contingent on a specific reference distance for depth-disparity scaling that corresponds to the length of our arm. Here we show that the brain's representation of the arm indeed powerfully modulates depth perception, and that this internal calibration can be quickly updated. We used a classic visuomotor adaptation task in which subjects execute reaching movements with the visual feedback of their reaching finger displaced farther in depth, as if they had a longer arm. After adaptation, 3D perception changed dramatically, and became accurate at the "new" natural grasping distance, the updated disparity scaling reference distance. We further tested whether the rapid adaptive changes were restricted to the visual modality or were characteristic of sensory systems in general. Remarkably, we found an improvement in tactile discrimination consistent with a magnified internal image of the arm. This suggests that the brain integrates sensory signals with information about arm length, and quickly adapts to an artificially updated body structure. These adaptive processes are most likely a relic of the mechanisms needed to optimally correct for changes in size and shape of the body during ontogenesis.


Subject(s)
Adaptation, Physiological , Depth Perception , Discrimination, Psychological , Psychomotor Performance/physiology , Touch Perception , Adult , Feedback, Physiological , Female , Fingers/innervation , Fingers/physiology , Humans , Male , Movement
18.
PLoS One ; 7(6): e39708, 2012.
Article in English | MEDLINE | ID: mdl-22768109

ABSTRACT

Saccades are so-called ballistic movements, executed without online visual feedback. After each saccade, the saccadic motor plan is modified in response to post-saccadic feedback through the mechanism of saccadic adaptation. The post-saccadic feedback is provided by the retinal position of the target after the saccade. If the target moves after the saccade, gaze may follow the moving target; in that case, the eyes are controlled by the pursuit system, the system that controls smooth eye movements. Although these two systems have in the past been considered mostly independent, recent lines of research point toward many interactions between them. We were interested in whether saccade amplitude adaptation is induced when the target moves smoothly after the saccade. Prior studies of saccadic adaptation have used intra-saccadic target steps as learning signals. In the present study, the intra-saccadic target step of the McLaughlin paradigm of saccadic adaptation was replaced by target movement and post-saccadic pursuit of the target. We found that saccadic adaptation occurred in this situation, a further indication of an interaction between the saccadic and pursuit systems aimed at optimized eye movements.


Subject(s)
Adaptation, Ocular/physiology , Movement/physiology , Saccades/physiology , Adult , Female , Humans , Male , Photic Stimulation , Young Adult
19.
Exp Brain Res ; 203(3): 621-7, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20437169

ABSTRACT

The simple experience of a coherent percept while looking and touching an object conceals an intriguing issue: different senses encode and compare information in different modality-specific reference frames. We addressed this problem in a cross-modal visuo-haptic mental rotation task. Two objects in various orientations were presented at the same spatial location, one visually and one haptically. Participants had to identify the objects as same or different. The relative angle between viewing direction and hand orientation was manipulated (Aligned versus Orthogonal). In an additional condition (Delay), a temporal delay was introduced between haptic and visual explorations while the viewing direction and the hand orientation were orthogonal to each other. Whereas the phase shift of the response time function was close to 0 degrees in the Aligned condition, we observed a consistent phase shift in the hand's direction in the Orthogonal condition. A phase shift, although reduced, was also found in the Delay condition. Counterintuitively, these results mean that seen and touched objects do not need to be physically aligned for optimal performance to occur. The present results suggest that the information about an object is acquired in separate visual and hand-centered reference frames, which directly influence each other and which combine in a time-dependent manner.


Subject(s)
Rotation , Touch Perception , Visual Perception , Analysis of Variance , Hand , Humans , Imagination , Male , Neuropsychological Tests , Pattern Recognition, Physiological , Pattern Recognition, Visual , Reaction Time , Recognition, Psychology
20.
Exp Brain Res ; 193(4): 639-44, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19199097

ABSTRACT

We studied the influence of haptics on visual perception of three-dimensional shape. Observers were shown pictures of an oblate spheroid in two different orientations. A gauge-figure task was used to measure their perception of the global shape. In the first two sessions only vision was used. The results showed that observers made large errors and interpreted the oblate spheroid as a sphere. They also misinterpreted the rotated oblate spheroid for a prolate spheroid. In two subsequent sessions observers were allowed to touch the stimulus while performing the task. The visual input remained unchanged: the observers were looking at the picture and could not see their hands. The results revealed that observers perceived a shape that was different from the vision-only sessions and closer to the veridical shape. Whereas, in general, vision is subject to ambiguities that arise from interpreting the retinal projection, our study shows that haptic input helps to disambiguate and reinterpret the visual input more veridically.


Subject(s)
Touch Perception , Visual Perception , Female , Hand , Humans , Male , Regression Analysis , Rotation