1.
iScience ; 27(3): 109167, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38414862

ABSTRACT

Spatial cognition and mobility are typically impaired in congenitally blind individuals, as vision usually calibrates space perception by providing the most accurate distal spatial cues. We have previously shown that sight restoration from congenital bilateral cataracts guides the development of more accurate space perception, even when cataract removal occurs years after birth. However, late cataract-treated individuals do not usually reach the performance levels of the typically sighted population. Here, we developed a brief multisensory training that associated audiovisual feedback with body movements. Late cataract-treated participants quickly improved their space representation and mobility, performing as well as typically sighted controls in most tasks. Their improvement was comparable with that of a group of blind participants, who underwent training coupling their movements with auditory feedback alone. These findings suggest that spatial cognition can be enhanced by a training program that strengthens the association between bodily movements and their sensory feedback (either auditory or audiovisual).

2.
Sci Rep ; 13(1): 11435, 2023 07 15.
Article in English | MEDLINE | ID: mdl-37454205

ABSTRACT

The Bouba-Kiki effect is the systematic mapping between round/spiky shapes and speech sounds ("Bouba"/"Kiki"). In the size-weight illusion, participants judge the smaller of two equally-weighted objects as being heavier. Here we investigated the contribution of visual experience to the development of these phenomena. We compared three groups: early blind individuals (no visual experience), individuals treated for congenital cataracts years after birth (late visual experience), and typically sighted controls (visual experience from birth). We found that, in cataract-treated participants (tested visually/visuo-haptically), both phenomena are absent shortly after sight onset, just as in blind individuals (tested haptically). However, they emerge within months following surgery, becoming statistically indistinguishable from those of sighted controls. This suggests a pivotal role of visual experience and refutes the existence of an early sensitive period: a short period of experience, even when gained only years after birth, is sufficient for participants to visually pick up regularities in the environment, contributing to the development of these phenomena.


Subject(s)
Cataract , Eye Abnormalities , Illusions , Humans , Vision Disorders , Vision, Ocular , Phonetics , Blindness/congenital
3.
Curr Biol ; 33(10): 2104-2110.e4, 2023 05 22.
Article in English | MEDLINE | ID: mdl-37130520

ABSTRACT

We investigated whether early visual input is essential for establishing the ability to use predictions in the control of actions and for perception. To successfully interact with objects, it is necessary to pre-program bodily actions such as grasping movements (feedforward control). Feedforward control requires a model for making predictions, which is typically shaped by previous sensory experience and interaction with the environment.1 Vision is the most crucial sense for establishing such predictions.2,3 We typically rely on visual estimations of the to-be-grasped object's size and weight in order to scale grip force and hand aperture accordingly.4,5,6 Size-weight expectations play a role also for perception, as evident in the size-weight illusion (SWI), in which the smaller of two equal-weight objects is misjudged to be heavier.7,8 Here, we investigated predictions for action and perception by testing the development of feedforward controlled grasping and of the SWI in young individuals surgically treated for congenital cataracts several years after birth. Surprisingly, what typically developing individuals do easily within the first years of life, namely to adeptly grasp new objects based on visually predicted properties, cataract-treated individuals did not learn after years of visual experience. In contrast, the SWI exhibited significant development. Even though the two tasks differ in substantial ways, these results may suggest a potential dissociation in using visual experience to make predictions about an object's features for perception or action. What seems a very simple task, picking up small objects, is in truth a highly complex computation that necessitates early structured visual input to develop.


Subject(s)
Cataract , Illusions , Humans , Psychomotor Performance , Vision Disorders , Hand , Movement , Blindness/congenital , Visual Perception
4.
J Vis ; 23(4): 6, 2023 04 03.
Article in English | MEDLINE | ID: mdl-37097225

ABSTRACT

We aimed to advance our understanding of local-global preference by exploring its developmental path within and across sensory modalities: vision and haptics. Neurotypical individuals from six years of age through adulthood completed a similarity judgement task with hierarchical haptic or visual stimuli made of local elements (squares or triangles) forming a global shape (a square or a triangle). Participants chose which of two probes was more similar to a target: the one sharing the global shape (but different local shapes) or the one with the same local shapes (but different global shape). Across trials, we independently varied the size of the local elements and that of the global configuration-the latter was varied by manipulating local element density while keeping their numerosity constant. We found that the size of local elements (but not global size) modulates the effects of age and modality. For stimuli with smaller local elements, the proportion of global responses increased with age and was similar for visual and haptic stimuli. However, for stimuli made of our largest local elements, the global preference was reduced or absent, particularly in haptics, regardless of age. These results suggest that vision and haptics progressively converge toward similar global preference with age, but residual differences across modalities and across individuals may be observed, depending on the characteristics of the stimuli.


Subject(s)
Haptic Technology , Vision, Ocular , Humans , Visual Perception/physiology
5.
Neuroimage ; 274: 120141, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37120043

ABSTRACT

A brief period of monocular deprivation (MD) induces short-term plasticity of the adult visual system. Whether MD elicits neural changes beyond visual processing is yet unclear. Here, we assessed the specific impact of MD on neural correlates of multisensory processes. Neural oscillations associated with visual and audio-visual processing were measured for both the deprived and the non-deprived eye. Results revealed that MD changed neural activities associated with visual and multisensory processes in an eye-specific manner. Selectively for the deprived eye, alpha synchronization was reduced within the first 150 ms of visual processing. Conversely, gamma activity was enhanced in response to audio-visual events only for the non-deprived eye within 100-300 ms after stimulus onset. The analysis of gamma responses to unisensory auditory events revealed that MD elicited a crossmodal upweight for the non-deprived eye. Distributed source modeling suggested that the right parietal cortex played a major role in neural effects induced by MD. Finally, visual and audio-visual processing alterations emerged for the induced component of the neural oscillations, indicating a prominent role of feedback connectivity. Results reveal the causal impact of MD on both unisensory (visual and auditory) and multisensory (audio-visual) processes and, their frequency-specific profiles. These findings support a model in which MD increases excitability to visual events for the deprived eye and audio-visual and auditory input for the non-deprived eye.


Subject(s)
Visual Cortex , Adult , Humans , Visual Cortex/physiology , Visual Perception , Sensory Deprivation/physiology , Neuronal Plasticity/physiology , Vision, Monocular/physiology
6.
J Exp Psychol Gen ; 152(2): 448-463, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36048056

ABSTRACT

Visual landmarks provide crucial information for human navigation. But what characteristics define a landmark? To be uniquely recognized, a landmark should be distinctive and salient, while providing precise and accurate positional information. It should also be permanent. For example, to find your way back to your car, a nearby church is a better landmark than a distinctive truck or bicycle, because you have learned that there is a chance these objects might move. Here, we investigated human learning of landmark permanency for navigation, treating spatiotemporal permanency as a probabilistic property. We hypothesized that humans would be able to learn the probabilistic nature of landmark permanency and assign higher weight to more permanent landmarks. To test this hypothesis, we designed a homing task in which participants had to return to a position that was surrounded by three landmarks. In the learning phase we manipulated the spatiotemporal permanency of one landmark by secretly repositioning it before participants returned home. In the test phase, we investigated the weight allocated to the nonpermanent landmark by analyzing its influence on navigational performance during homing. We conducted four experiments: In the first two experiments we altered the statistics of permanency and accordingly found an influence on participants' behavior: nonpermanent objects were used less for finding home. In the last two experiments we investigated the role of short-term learning of novel statistics versus long-term knowledge about such statistics. No carry-over effects in Experiment 3 and very little influence of object identity with different long-term permanency characteristics in Experiment 4 revealed a dominance of short-term learning over the use of long-term a priori knowledge about object permanency. This indicates that long-term prior beliefs are quickly updated by the current permanency statistics.
Taken together, and consistent with a Bayesian account of navigation, these results indicate that humans quickly learn and update the statistics of landmark permanency and use them effectively, gradually assigning more weight to the more permanent landmarks and making them more important for navigation.


Subject(s)
Learning , Spatial Navigation , Humans , Bayes Theorem , Awareness , Space Perception
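The reliability-weighting suggested by these results can be sketched as a simple reweighting rule. Everything below (the function names, the frequency-based permanency estimate, the weighted-average homing rule, and the numbers) is an illustrative assumption, not the authors' fitted model:

```python
import numpy as np

def permanency_weights(move_counts, trial_counts):
    """Estimate each landmark's permanency as the fraction of trials
    on which it stayed put, then normalize into combination weights."""
    p_stay = 1.0 - np.asarray(move_counts) / np.asarray(trial_counts)
    return p_stay / p_stay.sum()

def homing_estimate(suggested_positions, weights):
    """Combine the home positions implied by each landmark,
    weighting more permanent landmarks more heavily."""
    return np.average(np.asarray(suggested_positions), axis=0, weights=weights)

# Three landmarks; the third was secretly moved on 8 of 10 learning trials,
# so its (displaced) suggested home position is down-weighted.
w = permanency_weights(move_counts=[0, 0, 8], trial_counts=[10, 10, 10])
home = homing_estimate([[0.0, 0.0], [0.1, 0.0], [2.0, 0.0]], w)
```

With these numbers the nonpermanent landmark retains a small but nonzero weight, mirroring the gradual (rather than all-or-none) down-weighting the experiments report.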
7.
Elife ; 11, 2022 10 24.
Article in English | MEDLINE | ID: mdl-36278872

ABSTRACT

Being able to perform adept goal-directed actions requires predictive, feed-forward control, including a mapping between the visually estimated target locations and the motor commands reaching for them. When the mapping is perturbed, e.g., due to muscle fatigue or optical distortions, we are quickly able to recalibrate the sensorimotor system to update this mapping. Here, we investigated whether early visual and visuomotor experience is essential for developing sensorimotor recalibration. To this end, we assessed young individuals deprived of pattern vision due to dense congenital bilateral cataracts who were surgically treated for sight restoration only years after birth. We compared their recalibration performance under such a distortion to that of age-matched sighted controls. Their sensorimotor recalibration was impaired right after surgery. This finding cannot be explained by their still-reduced visual acuity alone, since blurring vision in controls to a matching degree did not lead to comparable behavior. Nevertheless, the recalibration ability of cataract-treated participants gradually improved with time after surgery. Thus, the lack of early pattern vision affects visuomotor recalibration. However, this ability is not lost but slowly develops after sight restoration, highlighting the importance of sensorimotor experience gained late in life.


Subject(s)
Cataract , Humans , Cataract/congenital , Vision, Ocular , Visual Acuity , Visual Perception/physiology
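Sensorimotor recalibration of this kind is commonly caricatured as trial-by-trial error correction. The sketch below, including the learning-rate contrast between groups, is a hypothetical illustration of that idea, not the study's fitted model:

```python
def recalibrate(n_trials, shift, learning_rate):
    """Trial-by-trial error correction: the motor aim is nudged by a
    fraction of each trial's visually sensed reaching error."""
    aim = 0.0                       # current motor compensation (degrees)
    errors = []
    for _ in range(n_trials):
        error = shift - aim         # residual reaching error under the distortion
        aim += learning_rate * error
        errors.append(error)
    return aim, errors

# A hypothetical 10-degree optical shift; a lower learning rate (one way to
# model slower recalibration shortly after surgery) compensates more slowly.
aim_control, _ = recalibrate(n_trials=50, shift=10.0, learning_rate=0.2)
aim_patient, _ = recalibrate(n_trials=50, shift=10.0, learning_rate=0.05)
```

In this toy version both learners eventually converge on the full compensation; only the rate differs, which matches the report that the ability is "not lost but slowly develops."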
8.
IEEE Trans Haptics ; 15(4): 693-704, 2022.
Article in English | MEDLINE | ID: mdl-36149999

ABSTRACT

Multiple cues contribute to the discrimination of slip motion speed by touch. In our previous article, we demonstrated that masking vibrations at various frequencies impaired the discrimination of speed. In this article, we extended the previous results to evaluate this phenomenon on a smooth glass surface, and for different values of contact force and duration of the masking stimulus. Speed discrimination was significantly impaired by masking vibrations at high but not at low contact force. Furthermore, a short pulse of masking vibrations at motion onset produced a similar effect as the long masking stimulus, delivered throughout slip motion duration. This last result suggests that mechanical events at motion onset provide important cues to the discrimination of speed.


Subject(s)
Motion Perception , Touch Perception , Humans , Touch , Vibration , Motion
9.
Front Psychol ; 13: 906643, 2022.
Article in English | MEDLINE | ID: mdl-35800945

ABSTRACT

Over the last few years online platforms for running psychology experiments beyond simple questionnaires and surveys have become increasingly popular. This trend has especially increased after many laboratory facilities had to temporarily avoid in-person data collection following COVID-19-related lockdown regulations. Yet, while offering a valid alternative to in-person experiments in many cases, platforms for online experiments are still not a viable solution for a large part of human-based behavioral research. Two situations in particular pose challenges: First, when the research question requires design features or participant interaction which exceed the customization capability provided by the online platform; and second, when variation among hardware characteristics between participants results in an inadmissible confounding factor. To mitigate the effects of these limitations, we developed ReActLab (Remote Action Laboratory), a framework for programming remote, browser-based experiments using freely available and open-source JavaScript libraries. Since the experiment is run entirely within the browser, our framework allows for portability to any operating system and many devices. In our case, we tested our approach by running experiments using only a specific model of Android tablet. Using ReActLab with this standardized hardware allowed us to optimize our experimental design for our research questions, as well as collect data outside of laboratory facilities without introducing setup variation among participants. In this paper, we describe our framework and show examples of two different experiments carried out with it: one consisting of a visuomotor adaptation task, the other of a visual localization task. Through comparison with results obtained from similar tasks in in-person laboratory settings, we discuss the advantages and limitations for developing browser-based experiments using our framework.

10.
Nat Commun ; 13(1): 2489, 2022 05 05.
Article in English | MEDLINE | ID: mdl-35513362

ABSTRACT

Neural mechanisms that arbitrate between integrating and segregating multisensory information are essential for complex scene analysis and for the resolution of the multisensory correspondence problem. However, these mechanisms and their dynamics remain largely unknown, partly because classical models of multisensory integration are static. Here, we used the Multisensory Correlation Detector, a model that provides good explanatory power for human behavior while incorporating dynamic computations. Participants judged whether sequences of auditory and visual signals originated from the same source (causal inference) or whether one modality was leading the other (temporal order), while being recorded with magnetoencephalography. First, we confirm that the Multisensory Correlation Detector explains causal inference and temporal order behavioral judgments well. Second, we found strong fits of brain activity to the two outputs of the Multisensory Correlation Detector in temporo-parietal cortices. Finally, we report an asymmetry in the goodness of the fits, which were more reliable during the causal inference task than during the temporal order judgment task. Overall, our results suggest the existence of multisensory correlation detectors in the human brain, which explain why and how causal inference is strongly driven by the temporal correlation of multisensory signals.


Subject(s)
Auditory Perception , Visual Perception , Acoustic Stimulation , Brain , Humans , Magnetoencephalography , Parietal Lobe , Photic Stimulation
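The two model outputs described above, one for "same source?" and one for "which modality leads?", can be sketched in a heavily simplified form. This is not the published Multisensory Correlation Detector; the leaky filter, the correlation unit, and the cross-correlation lag readout below are minimal stand-ins chosen for illustration:

```python
import numpy as np

def lowpass(signal, tau, dt=0.001):
    """Leaky integrator standing in for the model's temporal filters."""
    out = np.zeros(len(signal))
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + (dt / tau) * (signal[t] - out[t - 1])
    return out

def mcd_outputs(audio, video, tau=0.1, dt=0.001):
    """Two readouts after temporal filtering: a correlation unit (high when
    the streams co-vary, supporting a common-cause judgment) and a lag
    estimate in seconds (negative when audio leads video)."""
    fa, fv = lowpass(audio, tau, dt), lowpass(video, tau, dt)
    correlation = float(np.mean(fa * fv))
    xcorr = np.correlate(fa, fv, mode="full")
    lag = (int(np.argmax(xcorr)) - (len(fa) - 1)) * dt
    return correlation, lag

# Audio pulse at 100 ms; a visual pulse at 150 ms is correlated with it
# (audio leading), while a pulse at 900 ms is effectively unrelated.
audio = np.zeros(1000); audio[100] = 1.0
near = np.zeros(1000); near[150] = 1.0
far = np.zeros(1000); far[900] = 1.0
corr_near, lag = mcd_outputs(audio, near)
corr_far, _ = mcd_outputs(audio, far)
```

The correlation readout maps onto the causal-inference judgment and the signed lag onto the temporal-order judgment, which is the division of labor the study tests against brain activity.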
11.
Curr Biol ; 31(21): 4879-4885.e6, 2021 11 08.
Article in English | MEDLINE | ID: mdl-34534443

ABSTRACT

Adult humans make effortless use of multisensory signals and typically integrate them in an optimal fashion.1 This remarkable ability takes many years for normally sighted children to develop.2,3 Would individuals born blind or with extremely low vision still be able to develop multisensory integration later in life when surgically treated for sight restoration? Late acquisition of such capability would be a vivid example of the brain's ability to retain high levels of plasticity. We studied the development of multisensory integration in individuals suffering from congenital dense bilateral cataract, surgically treated years after birth. We assessed cataract-treated individuals' reliance on their restored visual abilities when estimating the size of an object simultaneously explored by touch. Within weeks to months after surgery, when combining information from vision and touch, they developed a multisensory weighting behavior similar to matched typically sighted controls. Next, we tested whether cataract-treated individuals benefited from integrating vision with touch by increasing the precision of size estimates, as it occurs when integrating signals in a statistically optimal fashion.1 For participants retested multiple times, such a benefit developed within months after surgery to levels of precision indistinguishable from optimal behavior. To summarize, the development of multisensory integration does not merely depend on age, but requires extensive multisensory experience with the world, rendered possible by the improved post-surgical visual acuity. We conclude that early exposure to multisensory signals is not essential for the development of multisensory integration, which can still be acquired even after many years of visual deprivation.


Subject(s)
Cataract , Touch Perception , Adult , Cataract/congenital , Child , Humans , Touch , Vision, Ocular , Visual Perception
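The "statistically optimal fashion" referred to in this abstract is the standard maximum-likelihood (inverse-variance) weighting rule; below is a minimal sketch with made-up numbers (the function name and example values are mine):

```python
import numpy as np

def integrate(estimates, sigmas):
    """Maximum-likelihood cue combination: each single-cue estimate is
    weighted by its inverse variance, and the fused estimate is more
    precise (lower sigma) than either cue alone."""
    prec = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = prec / prec.sum()
    fused = float(np.dot(weights, estimates))
    fused_sigma = float(np.sqrt(1.0 / prec.sum()))
    return fused, fused_sigma

# Vision (sigma = 0.5 cm) is more reliable than touch (sigma = 1.0 cm),
# so the fused size estimate sits closer to the visual one.
size, sigma = integrate(estimates=[5.0, 6.0], sigmas=[0.5, 1.0])
```

The precision benefit (`sigma` below the best single cue) is exactly the signature the authors tested for in the retested cataract-treated participants.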
12.
J Neurophysiol ; 126(2): 540-549, 2021 08 01.
Article in English | MEDLINE | ID: mdl-34259048

ABSTRACT

During a smooth pursuit eye movement of a target stimulus, a briefly flashed stationary background appears to move in the direction opposite to the eye's motion, an effect known as the Filehne illusion. Similar illusions occur in audition, in the vestibular system, and in touch. Recently, we found that the movement of a surface perceived from tactile slip was biased if this surface was sensed with the moving hand. The analogy between these two illusions suggests similar mechanisms of motion processing between vision and touch. In the present study, we further assessed the interplay between these two sensory channels by investigating a novel paradigm that associated an eye pursuit of a visual target with a tactile motion over the skin of the fingertip. We showed that smooth pursuit eye movements can bias the perceived direction of motion in touch. Similarly to the classical report of the Filehne illusion in vision, a static tactile surface was perceived as moving rightward with a leftward eye pursuit movement, and vice versa. However, this time the direction of surface motion was perceived from touch. The biasing effects of eye pursuit on tactile motion were modulated by the reliability of the tactile and visual stimuli, consistent with a Bayesian model of motion perception. Overall, these results support a modality- and effector-independent process with common representations for motion perception.

NEW & NOTEWORTHY The study showed that smooth pursuit eye movement produces a bias in tactile motion perception. This phenomenon is modulated by the reliability of the tactile estimate and by the presence of a visual background, in line with the predictions of the Bayesian framework of motion perception. Overall, these results support the hypothesis of shared representations for motion perception.


Subject(s)
Motion Perception , Pursuit, Smooth , Touch Perception , Adult , Female , Fingers/physiology , Humans , Male , Psychomotor Performance , Touch
14.
Front Psychol ; 12: 612558, 2021.
Article in English | MEDLINE | ID: mdl-33643139

ABSTRACT

Whenever we grasp and lift an object, our tactile system provides important information on the contact location and the force exerted on our skin. The human brain integrates signals from multiple sites for a coherent representation of object shape, inertia, weight, and other material properties. It is still an open question whether the control of grasp force occurs at the level of individual fingers or whether it is also influenced by the control and the signals from the other fingers of the same hand. In this work, we approached this question by asking participants to lift, transport, and replace a sensorized object, using three- and four-digit grasps. Tactile input was altered by covering participants' fingertips with a rubber thimble, which reduced the reliability of the tactile sensory input. In different experimental conditions, we covered between one and three fingers opposing the thumb. Normal forces at each finger and the thumb were recorded while grasping and holding the object, with and without the thimble. Consistent with previous studies, reducing tactile sensitivity increased the overall grasping force. The grasping force increased in the covered finger, whereas it did not change from baseline in the remaining bare fingers (except the thumb, for equilibrium constraints). Digit placement and object tilt were not systematically affected by rubber thimble conditions. Our results suggest that, in each finger opposing the thumb, digit normal force is controlled locally in response to the applied tactile perturbation.

15.
PLoS One ; 15(7): e0236824, 2020.
Article in English | MEDLINE | ID: mdl-32735569

ABSTRACT

In our daily life, we often interact with objects using both hands, raising the question of to what extent information between the hands is shared. It has, for instance, been shown that curvature adaptation aftereffects can transfer from the adapted hand to the non-adapted hand. However, this transfer only occurred for dynamic exploration, e.g. by moving a single finger over a surface, but not for static exploration, i.e. keeping static contact with the surface and combining the information from different parts of the hand. This raises the question to what extent adaptation to object shape is shared between the hands when both hands are used in a static fashion simultaneously and the object shape estimates require information from both hands. Here we addressed this question in three experiments using a slant adaptation paradigm. In Experiment 1 we investigated whether an aftereffect of static bimanual adaptation occurs at all and whether it transfers to conditions in which one hand was moving. In Experiment 2 participants adapted either to a felt slanted surface or simply by holding their hands in mid-air at similar positions, to investigate to what extent the effects of static bimanual adaptation are posture-based rather than object-based. Experiment 3 further explored the idea that bimanual adaptation is largely posture-based. We found that bimanual adaptation using static touch did lead to aftereffects when using the same static exploration mode for testing. However, the aftereffect did not transfer to any exploration mode that included a dynamic component. Moreover, we found similar aftereffects both with and without a haptic surface. Thus, we conclude that static bimanual adaptation is proprioceptive in nature and does not occur at the level at which the object is represented.


Subject(s)
Hand/physiology , Psychomotor Performance , Touch Perception/physiology , Adaptation, Physiological , Adult , Female , Humans , Male , Postural Balance/physiology , Touch , Young Adult
16.
PLoS Comput Biol ; 16(7): e1008020, 2020 07.
Article in English | MEDLINE | ID: mdl-32678847

ABSTRACT

Adaptation to the statistics of sensory inputs is an essential ability of neural systems and extends their effective operational range. Having a broad operational range makes it possible to react to sensory inputs of different granularities and is thus a crucial factor for survival. The computation of auditory cues for spatial localization of sound sources, particularly the interaural level difference (ILD), has long been considered a static process. Novel findings suggest that this process of ipsi- and contralateral signal integration is highly adaptive and depends strongly on recent stimulus statistics. Here, adaptation aids the encoding of auditory perceptual space at various granularities. To investigate the mechanism of auditory adaptation in binaural signal integration in detail, we developed a neural model architecture for simulating functions of the lateral superior olive (LSO) and the medial nucleus of the trapezoid body (MNTB), composed of single-compartment conductance-based neurons. Neurons in the MNTB serve as an intermediate relay population. Their signal is integrated by the LSO population on a circuit level to represent excitatory and inhibitory interactions of input signals. The circuit incorporates an adaptation mechanism operating at the synaptic level based on local inhibitory feedback signals. The model's predictive power is demonstrated in various simulations replicating physiological data. The adaptation mechanism shifts neural responses towards the most effective stimulus range based on recent stimulus history. The model demonstrates that a single LSO neuron quickly adapts to these stimulus statistics and can thus encode an extended range of ILDs in the ipsilateral hemisphere. Most significantly, we provide a unique measurement of the adaptation efficacy of LSO neurons. A prerequisite of normal function is an accurate interaction of inhibitory and excitatory signals, a precise encoding of time, and a well-tuned local feedback circuit. We suggest that the mechanisms of temporal competitive-cooperative interaction and the local feedback mechanism jointly sensitize the circuit to enable a response shift towards contralateral and ipsilateral stimuli, respectively.


Subject(s)
Computational Biology , Neurons/physiology , Olivary Nucleus/physiology , Synapses/physiology , Trapezoid Body/physiology , Acoustic Stimulation , Action Potentials , Algorithms , Animals , Auditory Pathways/physiology , Auditory Threshold , Computer Simulation , Cues , Gerbillinae , Humans , Models, Neurological , Normal Distribution , Receptors, GABA/physiology , Reproducibility of Results , Sound , Sound Localization , Superior Olivary Complex/physiology
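The core idea, an inhibitory gain that tracks recent responses and recentres the unit's dynamic range on the prevailing ILD, can be shown with a toy rate-level unit. This is a caricature: the sigmoid response, the feedback rule, and all constants below are my own assumptions, not the conductance-based model:

```python
import numpy as np

def lso_responses(ilds, adapt_rate=0.1):
    """Rate-level caricature of one 'LSO' unit: a sigmoid of ILD minus an
    adaptive inhibitory gain. Local feedback pulls the gain toward the
    recent stimulus statistics, freeing dynamic range around them."""
    gain = 1.0
    out = []
    for ild in ilds:
        r = 1.0 / (1.0 + np.exp(-(ild - gain)))   # response to current ILD (dB)
        gain += adapt_rate * (r - 0.5)            # local inhibitory feedback
        out.append(r)
    return np.array(out)

# A sustained 5 dB ILD: the initially near-saturated response relaxes toward
# mid-range, so nearby ILDs become discriminable again after adaptation.
responses = lso_responses(np.full(200, 5.0))
```

The relaxation toward mid-range is the toy analogue of the "shift in neural responses towards the most effective stimulus range" described above.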
17.
Front Neurorobot ; 14: 29, 2020.
Article in English | MEDLINE | ID: mdl-32499692

ABSTRACT

While interacting with the world, our senses and nervous system are constantly challenged to identify the origin and coherence of sensory input signals of various intensities. This problem becomes apparent when stimuli from different modalities need to be combined, e.g., to find out whether an auditory stimulus and a visual stimulus belong to the same object. To cope with this problem, humans and most other animal species are equipped with complex neural circuits that enable fast and reliable combination of signals from various sensory organs. This multisensory integration starts in the brain stem to facilitate unconscious reflexes and continues on ascending pathways to cortical areas for further processing. To investigate the underlying mechanisms in detail, we developed a canonical neural network model for multisensory integration that resembles neurophysiological findings. For example, the model comprises multisensory integration neurons that receive excitatory and inhibitory inputs from unimodal auditory and visual neurons, respectively, as well as feedback from cortex. Such feedback projections facilitate multisensory response enhancement and lead to the commonly observed inverse effectiveness of neural activity in multisensory neurons. Two versions of the model are implemented: a rate-based neural network model for qualitative analysis and a variant that employs spiking neurons for deployment on a neuromorphic processor. This dual approach allows us to create an evaluation environment with the ability to test model performance with real-world inputs. As a platform for deployment we chose IBM's neurosynaptic chip TrueNorth. Behavioral studies in humans indicate that temporal and spatial offsets as well as the reliability of stimuli are critical parameters for integrating signals from different modalities. The model reproduces such behavior in experiments with different sets of stimuli. In particular, model performance for stimuli with varying spatial offset is tested. 
In addition, we demonstrate that, due to the emergent properties of the network dynamics, model performance is close to optimal Bayesian inference for the integration of multimodal sensory signals. Furthermore, the implementation of the model on a neuromorphic processing chip enables a complete neuromorphic processing cascade from sensory perception to multisensory integration and the evaluation of model performance for real-world inputs.
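Inverse effectiveness, the proportionally larger multisensory gain for weaker stimuli mentioned above, falls out of any saturating response nonlinearity. The sketch below is a generic illustration with assumed parameters, not the TrueNorth implementation or the paper's network:

```python
import numpy as np

def response(drive, gain=10.0, threshold=0.5):
    """Saturating (sigmoidal) response of a model integration neuron."""
    return 1.0 / (1.0 + np.exp(-gain * (drive - threshold)))

def enhancement(aud, vis):
    """Multisensory enhancement: percent gain of the bimodal response
    over the best unisensory response."""
    uni = max(response(aud), response(vis))
    multi = response(aud + vis)
    return 100.0 * (multi - uni) / uni

weak = enhancement(0.25, 0.25)    # weak inputs: large proportional gain
strong = enhancement(0.6, 0.6)    # strong inputs: response already saturates
```

Because the summed drive of two weak inputs lands on the steep part of the sigmoid while strong inputs are already near saturation, the weak pair yields a much larger percentage enhancement, which is exactly the inverse-effectiveness signature.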

18.
J Neurophysiol ; 122(4): 1555-1565, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31314634

ABSTRACT

In vision, the perceived velocity of a moving stimulus differs depending on whether we pursue it with the eyes or not: A stimulus moving across the retina with the eyes stationary is perceived as being faster compared with a stimulus of the same physical speed that the observer pursues with the eyes, while its retinal motion is zero. This effect is known as the Aubert-Fleischl phenomenon. Here, we describe an analog phenomenon in touch. We asked participants to estimate the speed of a moving stimulus either from tactile motion only (i.e., motion across the skin), while keeping the hand world stationary, or from kinesthesia only by tracking the stimulus with a guided arm movement, such that the tactile motion on the finger was zero (i.e., only finger motion but no movement across the skin). Participants overestimated the velocity of the stimulus determined from tactile motion compared with kinesthesia in analogy with the visual Aubert-Fleischl phenomenon. In two follow-up experiments, we manipulated the stimulus noise by changing the texture of the touched surface. Similarly to the visual phenomenon, this significantly affected the strength of the illusion. This study supports the hypothesis of shared computations for motion processing between vision and touch.NEW & NOTEWORTHY In vision, the perceived velocity of a moving stimulus is different depending on whether we pursue it with the eyes or not, an effect known as the Aubert-Fleischl phenomenon. We describe an analog phenomenon in touch. We asked participants to estimate the speed of a moving stimulus either from tactile motion or by pursuing it with the hand. Participants overestimated the stimulus velocity measured from tactile motion compared with kinesthesia, in analogy with the visual Aubert-Fleischl phenomenon.


Subject(s)
Illusions/physiology , Kinesthesis , Motion Perception , Touch Perception , Adult , Brain/physiology , Eye Movements , Female , Humans , Male , Touch
19.
PLoS Comput Biol ; 15(3): e1006676, 2019 03.
Article in English | MEDLINE | ID: mdl-30835770

ABSTRACT

The plasticity of the human nervous system allows us to acquire an open-ended repository of sensorimotor skills in adulthood, such as the mastery of tools, musical instruments or sports. How novel sensorimotor skills are learned from scratch remains largely unknown. In particular, the so-called inverse mapping from goal states to motor states is underdetermined because a goal can often be achieved by many different movements (motor redundancy). How humans learn to resolve motor redundancy and by which principles they explore high-dimensional motor spaces has hardly been investigated. To study this question, we trained human participants in an unfamiliar and redundant visually guided manual control task. We qualitatively compare the experimental results with simulation results from a population of artificial agents that learned the same task by Goal Babbling, which is an inverse-model learning approach for robotics. In Goal Babbling, goal-related feedback guides motor exploration and thereby enables robots to learn an inverse model directly from scratch, without having to learn a forward model first. In the human experiment, we tested whether different initial conditions (starting positions of the hand) influence the acquisition of motor synergies, which we identified by Principal Component Analysis in the motor space. The results show that the human participants' solutions are spatially biased towards the different starting positions in motor space and are marked by a gradual co-learning of synergies and task success, similar to the dynamics of motor learning by Goal Babbling. However, there are also differences between human learning and the Goal Babbling simulations, as humans tend to predominantly use Degrees of Freedom that do not have a large effect on the hand position, whereas in Goal Babbling, Degrees of Freedom with a large effect on hand position are used predominantly.
We conclude that humans use goal-related feedback to constrain motor exploration and resolve motor redundancy when learning a new sensorimotor mapping, but in a manner that differs from the current implementation of Goal Babbling due to different constraints on motor exploration.
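The Goal Babbling loop described above (sample a goal, act with the current inverse-model estimate plus exploratory noise, learn from the observed outcome) can be sketched on a toy redundant plant. Everything in this sketch, including the plant x = q1 + q2, the nearest-neighbor inverse model, and the noise levels, is our own assumption rather than the paper's simulation setup.

```python
import random

# Minimal illustrative sketch of Goal Babbling on a toy redundant task:
# two joints q1, q2 move a "hand" to x = q1 + q2, so every target x has
# infinitely many solutions. All parameters here are assumptions.
random.seed(0)

def forward(q):                      # the plant, unknown to the learner
    q1, q2 = q
    return q1 + q2

samples = [((0.0, 0.0), 0.0)]        # (motor command, outcome) memory

def inverse(x_goal):                 # nearest-neighbor inverse estimate
    q, _x = min(samples, key=lambda s: abs(s[1] - x_goal))
    return q

for step in range(2000):             # the goal babbling loop
    x_goal = random.uniform(-1.0, 1.0)       # sample a goal
    q1, q2 = inverse(x_goal)                 # current best guess
    q = (q1 + random.gauss(0, 0.1),          # explore around the guess
         q2 + random.gauss(0, 0.1))
    samples.append((q, forward(q)))          # learn from the outcome

# After training, the learned inverse model resolves the redundancy with
# solutions biased toward the starting posture (0, 0), analogous to the
# starting-position bias reported for the human participants.
goals = [x / 10 for x in range(-10, 11)]
err = sum(abs(forward(inverse(g)) - g) for g in goals) / len(goals)
print("mean reach error:", err)
```

No forward model is ever fitted: the memory of (command, outcome) pairs is queried directly in the goal direction, which is the defining feature of the approach.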


Subject(s)
Feedback , Goals , Motor Skills , Adult , Biomechanical Phenomena , Humans , Robotics
20.
Front Robot AI ; 6: 43, 2019.
Article in English | MEDLINE | ID: mdl-33501059

ABSTRACT

Feedback is essential for skill acquisition, as it helps to identify and correct performance errors. Nowadays, Virtual Reality can be used as a tool to guide motor learning and to provide innovative types of augmented feedback that exceed real-world opportunities. Concurrent feedback has been shown to be especially beneficial for novices. Moreover, watching skilled performances helps novices to acquire a motor skill, and this effect depends on the perspective taken by the observer. To date, however, the impact of watching one's own performance together with full-body superimposition of a skilled performance, either from the front or from the side, remains to be explored. Here we used an immersive, state-of-the-art, low-latency cave automatic virtual environment (CAVE), and we asked novices to perform squat movements in front of a virtual mirror. Participants were assigned to one of three concurrent visual feedback groups: participants either watched their own avatar performing full-body movements or were presented with the movement of a skilled individual superimposed on their own performance during movement execution, either from a frontal or from a side view. Motor performance and cognitive representation were measured in order to track changes in movement quality as well as motor memory across time. Consistent with our hypotheses, results showed an advantage of the groups that observed their own avatar performing the squat together with the superimposed skilled performance for some of the investigated parameters, depending on perspective. Specifically, for the deepest point of the squat, participants watching the squat from the front adapted their height, while those watching from the side adapted their backward movement. In a control experiment, we ruled out the possibility that the observed improvements were due to the mere fact of performing the squat movements, irrespective of the type of visual feedback.
The present findings indicate that it can be beneficial for novices to watch themselves together with a skilled performance during execution, and that the improvement depends on the perspective chosen.
