1.
J Neurophysiol ; 130(4): 1028-1040, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37701952

ABSTRACT

When humans walk, it is important for them to have some measure of the distance they have traveled. Typically, many cues from different modalities are available, as humans perceive both the environment around them (for example, through vision and haptics) and their own walking. Here, we investigate the contribution of visual cues and nonvisual self-motion cues to distance reproduction when walking on a treadmill through a virtual environment by separately manipulating the speed of a treadmill belt and of the virtual environment. Using mobile eye tracking, we also investigate how our participants sampled the visual information through gaze. We show that, as predicted, both modalities affected how participants (N = 28) reproduced a distance. Participants weighed nonvisual self-motion cues more strongly than visual cues, corresponding also to their respective reliabilities, but with some interindividual variability. Those who looked more toward those parts of the visual scene that contained cues to speed and distance tended also to weigh visual information more strongly, although this correlation was nonsignificant, and participants generally directed their gaze toward visually informative areas of the scene less than expected. As measured by motion capture, participants adjusted their gait patterns to the treadmill speed but not to walked distance. In sum, we show in a naturalistic virtual environment how humans use different sensory modalities when reproducing distances and how the use of these cues differs between participants and depends on information sampling.

NEW & NOTEWORTHY Combining virtual reality with treadmill walking, we measured the relative importance of visual cues and nonvisual self-motion cues for distance reproduction. Participants used both cues but put more weight on self-motion; the weight on visual cues tended to correlate with looking at visually informative areas. Participants overshot distances, especially when self-motion was slow; they adjusted steps to self-motion cues but not to visual cues. Our work thus quantifies the multimodal contributions to distance reproduction.
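The weighting "corresponding also to their respective reliabilities" described above follows the standard maximum-likelihood cue-integration scheme, in which each cue's weight is proportional to its inverse variance. A minimal sketch (the function name and the example numbers are illustrative, not values from the study):

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted (maximum-likelihood) cue combination.

    Each cue's weight is its reliability (inverse variance),
    normalised so that the weights sum to 1. The combined estimate
    has lower variance than any single cue.
    """
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    combined = float(np.dot(weights, estimates))
    combined_variance = 1.0 / reliabilities.sum()
    return combined, weights, combined_variance

# Example: a self-motion cue with lower variance than the visual cue
# receives the larger weight, as reported for the participants here.
est, w, var = combine_cues(estimates=[10.0, 12.0], variances=[1.0, 4.0])
```

With these illustrative variances, the self-motion cue gets weight 0.8 and vision 0.2, so the combined estimate (10.4) sits closer to the more reliable cue.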


Subject(s)
Motion Perception , Virtual Reality , Humans , Cues , Walking , Gait
2.
Exp Brain Res ; 241(3): 765-780, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36725725

ABSTRACT

Walking is a complex task. To prevent falls and injuries, gait needs to constantly adjust to the environment. This requires information from various sensory systems; in turn, moving through the environment continuously changes available sensory information. Visual information is available from a distance, and therefore most critical when negotiating difficult terrain. To effectively sample visual information, humans adjust their gaze to the terrain or-in laboratory settings-when facing motor perturbations. During activities of daily living, however, only a fraction of sensory and cognitive resources can be devoted to ensuring safe gait. How do humans deal with challenging walking conditions when they face high cognitive load? Young, healthy participants (N = 24) walked on a treadmill through a virtual, but naturalistic environment. Occasionally, their gait was experimentally perturbed, inducing slipping. We varied cognitive load by asking participants in some blocks to count backward in steps of seven; orthogonally, we varied whether visual cues indicated upcoming perturbations. We replicated earlier findings on how humans adjust their gaze and their gait rapidly and flexibly on various time scales: eye and head movements responded in a partially compensatory pattern and visual cues mostly affected eye movements. Interestingly, the cognitive task affected mainly head orientation. During the cognitive task, we found no clear signs of a less stable gait or of a cautious gait mode, but evidence that participants adapted their gait less to the perturbations than without a secondary task. In sum, cognitive load affects head orientation and impairs the ability to adjust to gait perturbations.


Subject(s)
Activities of Daily Living , Cognition , Humans , Gait , Walking/psychology , Cues
3.
J Vis ; 21(8): 11, 2021 08 02.
Article in English | MEDLINE | ID: mdl-34351396

ABSTRACT

Most humans can walk effortlessly across uniform terrain even when they do not pay much attention to it. However, most natural terrain is far from uniform, and we need visual information to maintain stable gait. Recent advances in mobile eye-tracking technology have made it possible to study, in natural environments, how terrain affects gaze and thus the sampling of visual information. However, natural environments provide only limited experimental control, and some conditions cannot safely be tested. Typical laboratory setups, in contrast, are far from natural settings for walking. We used a setup consisting of a dual-belt treadmill, 240° projection screen, floor projection, three-dimensional optical motion tracking, and mobile eye tracking to investigate eye, head, and body movements during perturbed and unperturbed walking in a controlled yet naturalistic environment. In two experiments (N = 22 each), we simulated terrain difficulty by repeatedly inducing slipping through accelerating either of the two belts rapidly and unpredictably (Experiment 1) or sometimes following visual cues (Experiment 2). We quantified the distinct roles of eye and head movements for adjusting gaze on different time scales. While motor perturbations mainly influenced head movements, eye movements were primarily affected by the presence of visual cues. This was true both immediately following slips and-to a lesser extent-over the course of entire 5-min blocks. Gaze parameters were already adapted after the first perturbation in each block, with little transfer between blocks. In conclusion, gaze-gait interactions in experimentally perturbed yet naturalistic walking are adaptive, flexible, and effector specific.


Subject(s)
Gait , Walking , Adaptation, Physiological , Eye Movements , Head Movements , Humans
4.
Psychol Res ; 83(1): 147-158, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30259095

ABSTRACT

The perceived distance of objects is biased depending on the distance from the observer at which objects are presented, such that the egocentric distance tends to be overestimated for closer objects, but underestimated for objects further away. This leads to the perceived depth of an object (i.e., the perceived distance from the front to the back of the object) also being biased, decreasing with object distance. Several studies have found the same pattern of biases in grasping tasks. However, in most of those studies, object distance and depth were solely specified by ocular vergence and binocular disparities. Here we asked whether grasping objects viewed from above would eliminate distance-dependent depth biases, since this vantage point introduces additional information about the object's distance, given by the vertical gaze angle, and its depth, given by contour information. Participants grasped objects presented at different distances (1) at eye-height and (2) 130 mm below eye-height, along their depth axes. In both cases, grip aperture was systematically biased by the object distance along most of the trajectory. The same bias was found whether the objects were seen in isolation or above a ground plane to provide additional depth cues. In two additional experiments, we verified that a consistent bias occurs in a perceptual task. These findings suggest that grasping actions are not immune to biases typically found in perceptual tasks, even when additional cues are available. However, online visual control can counteract these biases when direct vision of both digits and final contact points is available.


Subject(s)
Cues , Distance Perception/physiology , Hand Strength/physiology , Movement/physiology , Psychomotor Performance/physiology , Adult , Female , Humans , Male , Young Adult
5.
Exp Brain Res ; 236(5): 1309-1320, 2018 May.
Article in English | MEDLINE | ID: mdl-29502246

ABSTRACT

Manual estimates without vision of the hand are thought to constitute a form of cross-modal matching between stimulus size and finger opening. However, few investigations have systematically looked at how manual estimates relate a perceived size to the response across different ranges of stimuli. In two experiments (N = 18 and N = 14), we sought to map out the response properties for (1) manual estimates of visually presented stimuli as well as (2) visual estimates of proprioceptive stimuli, and to test whether these properties depend on the range of stimuli. We also looked at whether scalar variability is present in manual estimates, as predicted by Weber's Law for perceptual tasks. We found that manual estimates scale linearly and with a slope of close to 1 with object sizes up to 90 mm, before participants' hand size limited their responses. In contrast, we found a shallower response slope of about 0.7 when participants performed the inverse task, adjusting the size of a visual object to match a not actively chosen, induced finger opening. Our results were mixed with regard to scalar variability for large objects. We saw some indication of a plateau, but no evidence for an effect of mechanical constraints in the range studied (up to 90 mm). Participants also showed a clear tendency to overestimate small differences when a set of objects differed little in size, but not when stimulus differences were more pronounced.


Subject(s)
Fingers/physiology , Proprioception/physiology , Psychomotor Performance/physiology , Reaction Time/physiology , Size Perception/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Young Adult
6.
Adv Cogn Psychol ; 14(3): 87-100, 2018.
Article in English | MEDLINE | ID: mdl-32256903

ABSTRACT

The mental number line (MNL) is a popular metaphor for magnitude representation in numerical cognition. Its shape has frequently been reported as being nonlinear, based on nonlinear response functions in magnitude estimation. We investigated whether this shape reflects a phenomenon of the mapping from stimulus to internal magnitude representation or of the mapping from internal representation to response. In five experiments, participants (total N = 66) viewed stimuli that represented numerical magnitude either in a symbolic notation (i.e., Arabic digits) or in a nonsymbolic notation (i.e., clouds of dots). Participants estimated these magnitudes by either adjusting the position of a mark on a ruler-like response bar (nonsymbolic response) or by typing the corresponding number on a keyboard (symbolic response). Responses to symbolic stimuli were markedly different from responses to nonsymbolic stimuli, in that they were mostly power-shaped. We investigated whether the nonlinearity could be explained by effects of previous trials, but such effects were (a) not strong enough to explain the nonlinear responses and (b) existed only between trials of the same input notation, suggesting that the nonlinearity is due to input mappings. Introducing veridical feedback improved the accuracy of responses, thereby showing a calibration based on the feedback. However, this calibration persisted only temporarily, and responses to nonsymbolic stimuli remained nonlinear. Overall, we conclude that the nonlinearity is a phenomenon of the mapping from nonsymbolic input format to internal magnitude representation and that the phenomenon is surprisingly robust to calibration.
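The "power-shaped" response functions reported above are conventionally characterised by fitting a power law, response = a · stimulus^b, which becomes a straight line in log-log coordinates. A minimal sketch of such a fit (function name and synthetic numbers are illustrative, not data from the study):

```python
import numpy as np

def fit_power_law(stimulus, response):
    """Fit response = a * stimulus**b via linear regression in log-log space.

    In log coordinates the model is log(y) = log(a) + b*log(x),
    so the slope of the regression line is the exponent b.
    """
    b, log_a = np.polyfit(np.log(stimulus), np.log(response), 1)
    return np.exp(log_a), b

# Synthetic compressive mapping (exponent < 1), the qualitative shape
# typically reported for nonsymbolic numerosity estimates.
x = np.array([2, 5, 10, 20, 50, 100], dtype=float)
y = 1.5 * x ** 0.7
a, b = fit_power_law(x, y)
```

An exponent b < 1 corresponds to the compressive nonlinearity discussed in the abstract: large magnitudes are increasingly underestimated relative to small ones.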

7.
Vision Res ; 136: 21-31, 2017 07.
Article in English | MEDLINE | ID: mdl-28571701

ABSTRACT

Recent results have shown that effects of pictorial illusions in grasping may decrease over the course of an experiment. This can be explained as an effect of sensorimotor learning if we consider a pictorial size illusion as simply a perturbation of visually perceived size. However, some studies have reported very constant illusion effects over trials. In the present paper, we apply an error-correction model of adaptation to experimental data of N = 40 participants grasping the Müller-Lyer illusion. Specifically, participants grasped targets embedded in incremental and decremental Müller-Lyer illusion displays in (1) the same block in pseudo-randomised order, and (2) separate blocks of only one type of illusion each. Consistent with predictions of our model, we found an effect of interference between the two types when they were presented intermixed, explaining why adaptation rates may vary depending on the experimental design. We also systematically varied the number of object sizes per block, which turned out to have no effect on the rate of adaptation. This was also in accordance with our model. We discuss implications for the illusion literature, and lay out how error-correction models can explain perception-action dissociations in some, but not all grasping-of-illusion paradigms in a parsimonious and plausible way, without assuming different illusion effects.
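The error-correction logic described above can be illustrated with a simple delta-rule simulation: on each trial a fraction of the residual error is corrected, so a constant perturbation (blocked design) is gradually adapted away, while alternating perturbations (intermixed design) interfere with each other. This is a sketch of the general idea only; the parameter value and function name are illustrative, not the model fitted in the paper:

```python
def simulate_adaptation(perturbations, learning_rate=0.3):
    """Trial-by-trial error correction (delta rule).

    The internal correction state is updated by a fraction of the
    previous trial's error, so the measured illusion effect on each
    trial is the perturbation minus the accumulated correction.
    """
    state = 0.0          # accumulated correction applied to grip aperture
    effects = []
    for p in perturbations:
        error = p - state        # residual illusion effect on this trial
        effects.append(error)
        state += learning_rate * error
    return effects

# Blocked design: the same illusion sign throughout, so the
# correction builds up and the effect shrinks trial by trial.
blocked = simulate_adaptation([1.0] * 5)

# Intermixed design: alternating signs make successive updates
# interfere, so effects do not simply decay.
intermixed = simulate_adaptation([1.0, -1.0, 1.0, -1.0, 1.0])
```

In the blocked run the effect decays monotonically; in the intermixed run the correction built up on one trial adds to the error on the next, which is the interference pattern the abstract reports.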


Subject(s)
Adaptation, Ocular/physiology , Hand Strength/physiology , Illusions , Psychomotor Performance/physiology , Adolescent , Adult , Female , Humans , Learning , Male , Visual Perception/physiology , Young Adult
9.
PLoS One ; 11(9): e0163897, 2016.
Article in English | MEDLINE | ID: mdl-27684956

ABSTRACT

The SNARC effect refers to an association of numbers and spatial properties of responses that is commonly thought to be amodal and independent of stimulus notation. We tested for a horizontal SNARC effect using Arabic digits, simple-form Chinese characters and Chinese hand signs in participants from Mainland China. We found a horizontal SNARC effect in all notations. This is the first time that a horizontal SNARC effect has been demonstrated in Chinese characters and Chinese hand signs. We tested for the SNARC effect in two experiments (parity judgement and magnitude judgement). The parity judgement task yielded clear, consistent SNARC effects in all notations, whereas results were more mixed in magnitude judgement. Both Chinese characters and Chinese hand signs are represented non-symbolically for low numbers and symbolically for higher numbers, allowing us to contrast within the same notation the effects of heavily learned non-symbolic vs. symbolic representation on the processing of numbers. In addition to finding a horizontal SNARC effect, we also found a robust numerical distance effect in all notations. This is particularly interesting as it persisted when participants reported using purely visual features to solve the task, thereby suggesting that numbers were processed semantically even when the task could be solved without the semantic information.

10.
Cortex ; 79: 130-52, 2016 06.
Article in English | MEDLINE | ID: mdl-27156056

ABSTRACT

It has often been suggested that visual illusions affect perception but not actions such as grasping, as predicted by the "two-visual-systems" hypothesis of Milner and Goodale (1995, The Visual Brain in Action, Oxford University Press). However, at least for the Ebbinghaus illusion, relevant studies seem to reveal a consistent illusion effect on grasping (Franz & Gegenfurtner, 2008. Grasping visual illusions: consistent data and no dissociation. Cognitive Neuropsychology). Two interpretations are possible: either grasping is not immune to illusions (arguing against dissociable processing mechanisms for vision-for-perception and vision-for-action), or some other factors modulate grasping in ways that mimic a vision-for-perception effect in actions. It has been suggested that one such factor may be obstacle avoidance (Haffenden, Schiff, & Goodale, 2001. The dissociation between perception and action in the Ebbinghaus illusion: nonillusory effects of pictorial cues on grasp. Current Biology, 11, 177-181). In four different labs (total N = 144), we conducted an exact replication of previous studies suggesting obstacle avoidance mechanisms, implementing conditions that tested grasping as well as multiple perceptual tasks. This replication was supplemented by additional conditions to obtain more conclusive results. Our results confirm that grasping is affected by the Ebbinghaus illusion and demonstrate that this effect cannot be explained by obstacle avoidance.


Subject(s)
Optical Illusions/physiology , Psychomotor Performance/physiology , Vision, Ocular/physiology , Visual Cortex/physiology , Visual Perception/physiology , Hand Strength/physiology , Humans